Decoding images from brain activity has long been a challenge. With the development of deep learning, tools are now available to tackle this problem. Image decoding aims to map neural spike trains to the space of low-level visual features and high-level semantic information. Recently, there have been several studies on decoding from spike trains; however, these studies pay less attention to the fundamentals of neuroscience, and few incorporate receptive fields into visual image reconstruction. In this paper, we propose a deep learning neural network architecture with biological properties to reconstruct visual images from spike trains. To the best of our knowledge, we are the first to implement a method that integrates a receptive-field property matrix into the loss function. Our model is an end-to-end decoder from neural spike trains to images. We not only incorporate Gabor filters into the autoencoder used to generate images, but also propose a loss function with receptive-field properties. We evaluated our decoder on two datasets containing macaque primary visual cortex neural spikes and salamander retinal ganglion cell (RGC) spikes. Our results show that our method can effectively combine receptive-field features to reconstruct images, offering a new approach to visual reconstruction based on neural information.
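The abstract does not spell out the loss, so the following is a minimal sketch only: it assumes a per-pixel receptive-field weight matrix (`rf_matrix`) and a fixed four-orientation Gabor bank, with all parameter values illustrative rather than the paper's.

```python
# Minimal sketch, not the authors' implementation: a reconstruction loss that
# weights pixel errors by a receptive-field property matrix and compares
# fixed-Gabor-bank responses. All parameter values below are illustrative.
import torch
import torch.nn.functional as F

def gabor_kernel(size=11, sigma=2.0, theta=0.0, lambd=4.0, psi=0.0):
    """Build one 2D Gabor kernel (Gaussian envelope times cosine carrier)."""
    half = size // 2
    coords = torch.arange(-half, half + 1, dtype=torch.float32)
    y, x = torch.meshgrid(coords, coords, indexing="ij")
    theta = torch.tensor(theta)
    x_t = x * torch.cos(theta) + y * torch.sin(theta)
    y_t = -x * torch.sin(theta) + y * torch.cos(theta)
    envelope = torch.exp(-(x_t ** 2 + y_t ** 2) / (2 * sigma ** 2))
    return envelope * torch.cos(2 * torch.pi * x_t / lambd + psi)

# Fixed (non-trainable) bank with four orientations, for grayscale images.
bank = torch.stack(
    [gabor_kernel(theta=t) for t in (0.0, 0.7854, 1.5708, 2.3562)]
).unsqueeze(1)  # shape (4, 1, 11, 11)

def rf_weighted_loss(recon, target, rf_matrix):
    """Pixel MSE weighted by rf_matrix, plus MSE between Gabor responses."""
    pixel_term = (rf_matrix * (recon - target) ** 2).mean()
    gabor_term = F.mse_loss(F.conv2d(recon, bank, padding=5),
                            F.conv2d(target, bank, padding=5))
    return pixel_term + 0.1 * gabor_term

# Toy usage: random tensors stand in for decoder output and stimulus image.
recon = torch.rand(8, 1, 64, 64, requires_grad=True)
target = torch.rand(8, 1, 64, 64)
rf = torch.rand(1, 1, 64, 64)  # hypothetical receptive-field weight map
rf_weighted_loss(recon, target, rf).backward()
```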
Noise and color distortion in low-light images pose challenges for tasks such as object detection, semantic segmentation, and instance segmentation. In this paper, we propose an algorithm for low-light enhancement. KIND-LE uses a light-curve estimation module in its network structure to enhance the illumination map of the Retinex-decomposed image, thereby improving image brightness. We propose an illumination map and reflectance map fusion module to recover details in the restored image and reduce detail loss. Finally, we include a total variation loss function to suppress noise. Our method uses the GLADNet dataset as the training set and the LOL dataset as the test set, and is validated with ExDark as the dataset for downstream tasks. Extensive experiments on benchmarks demonstrate the advantages of our method, which approaches state-of-the-art results with a PSNR of 19.7216 and an SSIM of 0.8213.
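As a concrete reference for the total variation term named above, here is a standard anisotropic TV loss sketch; KIND-LE's exact formulation may differ.

```python
# Standard anisotropic total variation loss; a sketch of the TV term named in
# the abstract, whose exact formulation in KIND-LE may differ.
import torch

def total_variation_loss(img: torch.Tensor) -> torch.Tensor:
    """Mean absolute difference between neighboring pixels of a
    (batch, channels, height, width) tensor."""
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return dh + dw

# Toy usage: penalize noise in an enhanced illumination map.
illumination = torch.rand(4, 1, 128, 128, requires_grad=True)
total_variation_loss(illumination).backward()
```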
Recent work on machine learning for combinatorial optimization has shown that learning-based approaches can outperform heuristics in terms of speed and performance. In this paper, we consider the problem of finding an optimal topological order on a directed acyclic graph, with a focus on the memory-minimization problem that arises in compilers. We propose an end-to-end machine learning approach for topological ordering using an encoder-decoder framework. Our encoder is a novel attention-based graph neural network architecture called \emph{Topoformer}, which uses different topological transforms of the DAG for message passing. The node embeddings produced by the encoder are converted into node priorities, which the decoder uses to generate a probability distribution over topological orders. We train our model on a dataset of synthetically generated graphs called layered graphs. We show that our model outperforms, or is on par with, several topological ordering baselines while being significantly faster on synthetic graphs with up to 2k nodes. We also train and test our model on a set of real-world computation graphs, showing performance improvements.
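To make the priority-to-order step concrete, here is a hedged sketch of how per-node priorities could induce a distribution over topological orders via softmax-weighted sampling among schedulable nodes; the priorities and temperature below are stand-ins, not Topoformer's actual decoder.

```python
# Sketch: sample a topological order where, at each step, the next node is
# drawn from the currently schedulable (in-degree 0) nodes with probability
# proportional to exp(priority / temperature). Priorities are stand-ins for
# the encoder's output.
import math
import random

def sample_topological_order(adj, priorities, temperature=1.0):
    """adj: dict mapping node -> list of successors; priorities: node -> score."""
    indeg = {v: 0 for v in adj}
    for u in adj:
        for v in adj[u]:
            indeg[v] += 1
    ready = [v for v in adj if indeg[v] == 0]
    order = []
    while ready:
        weights = [math.exp(priorities[v] / temperature) for v in ready]
        chosen = random.choices(ready, weights=weights, k=1)[0]
        ready.remove(chosen)
        order.append(chosen)
        for w in adj[chosen]:
            indeg[w] -= 1
            if indeg[w] == 0:
                ready.append(w)
    return order

dag = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
scores = {"a": 0.1, "b": 1.5, "c": 0.3, "d": 0.0}
print(sample_topological_order(dag, scores))  # e.g. ['a', 'b', 'c', 'd']
```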
Visual place recognition is a challenging task for applications such as autonomous driving navigation and mobile robot localization. Distracting elements present in complex scenes often lead to deviations in the perception of visual places. To address this problem, it is crucial to integrate information from only task-relevant regions into image representations. In this paper, we introduce a novel holistic place recognition model based on vision Transformers, TransVPR. It benefits from the desirable property of the self-attention operation in Transformers, which can naturally aggregate task-relevant features. Attentions from multiple levels of the Transformer, which focus on different regions of interest, are combined to generate a global image representation. In addition, the output tokens from the Transformer layers, filtered by the fused attention mask, are regarded as key-patch descriptors and are used to perform spatial matching to re-rank the candidates retrieved by the global image features. The whole model allows end-to-end training with a single objective and image-level supervision. TransVPR achieves state-of-the-art performance on several real-world benchmarks while maintaining low computational time and storage requirements.
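A rough sketch of the fusion-and-filtering step follows; simple attention averaging and top-k selection are assumed placeholders for TransVPR's actual fusion rule, which may be learned.

```python
# Rough sketch with assumed details: attention maps from several Transformer
# levels are averaged into one mask, which then selects output tokens as
# key-patch descriptors.
import torch

def fuse_attention_and_filter(tokens, attn_maps, keep_ratio=0.25):
    """tokens: (num_patches, dim) last-layer output tokens.
    attn_maps: list of (num_patches,) attention maps from different levels.
    Returns indices and descriptors of the highest-attention patches."""
    fused = torch.stack(attn_maps).mean(dim=0)   # naive averaging fusion
    k = max(1, int(keep_ratio * tokens.shape[0]))
    idx = torch.topk(fused, k).indices           # task-relevant patches
    return idx, tokens[idx]

# Toy usage: 196 patch tokens (14x14) and attention from three levels.
tokens = torch.rand(196, 256)
attn_maps = [torch.rand(196) for _ in range(3)]
idx, descriptors = fuse_attention_and_filter(tokens, attn_maps)
print(descriptors.shape)  # torch.Size([49, 256])
```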
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
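The following is a highly simplified, self-contained sketch of generative replay for sequential adaptation; the toy networks and the entropy surrogate loss are assumptions, not GarDA's actual components.

```python
# Highly simplified sketch of generative appearance replay: mix unlabeled
# new-domain images with images replayed by a generator trained on previous
# domains, so the segmenter adapts without access to previously seen data.
import torch
import torch.nn as nn
import torch.nn.functional as F

segmenter = nn.Conv2d(1, 3, kernel_size=3, padding=1)            # toy 3-class model
generator = nn.Sequential(nn.Linear(16, 32 * 32), nn.Sigmoid())  # toy replay net
optimizer = torch.optim.Adam(segmenter.parameters(), lr=1e-4)

def entropy_loss(logits):
    """Common unsupervised surrogate: encourage confident predictions."""
    log_p = F.log_softmax(logits, dim=1)
    return -(log_p.exp() * log_p).sum(dim=1).mean()

for step in range(10):
    new_images = torch.rand(4, 1, 32, 32)        # unlabeled new-domain batch
    with torch.no_grad():                        # replay old-domain appearance
        replayed = generator(torch.randn(4, 16)).view(4, 1, 32, 32)
    batch = torch.cat([new_images, replayed], dim=0)
    loss = entropy_loss(segmenter(batch))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```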
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest body of original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when introducing multiple relations. By analyzing experiment results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
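As an illustration of what a multi-relational account graph of this shape might look like in code, here is a hypothetical sketch with random stand-in data and naive per-relation mean aggregation; it does not reflect the MGTAB repository's actual file layout or API.

```python
# Hypothetical sketch: per-user feature matrix plus one edge list per relation
# type, aggregated relation by relation. All arrays are random stand-ins.
import numpy as np

num_users, num_features, num_relations = 1000, 20, 7
features = np.random.rand(num_users, num_features)  # property + tweet features
labels = np.random.randint(0, 2, size=num_users)    # e.g. human vs. bot

# One edge list per relation type, each of shape (2, num_edges).
edges = {r: np.random.randint(0, num_users, size=(2, 5000))
         for r in range(num_relations)}

def mean_aggregate(feats, edge_index):
    """Average each node's in-neighbors' features for one relation."""
    out = np.zeros_like(feats)
    counts = np.zeros((feats.shape[0], 1))
    np.add.at(out, edge_index[1], feats[edge_index[0]])
    np.add.at(counts, edge_index[1], 1)
    return out / np.maximum(counts, 1)

per_relation = [mean_aggregate(features, edges[r]) for r in range(num_relations)]
fused = np.mean(per_relation, axis=0)               # naive multi-relation fusion
print(fused.shape)  # (1000, 20)
```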
As one of the prevalent methods for achieving automation, Imitation Learning (IL) shows promising performance in a wide range of domains. However, despite considerable improvements in policy performance, research on the explainability of IL models remains limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explanation framework for IL models called R2RISE. R2RISE aims to explain overall policy performance with respect to the frames in demonstrations. It iteratively retrains the black-box IL model from randomly masked demonstrations and uses environment returns, the conventional evaluation outcome, as coefficients to build an importance map. We also conducted experiments to investigate three major questions concerning the equality of frames' importance, the effectiveness of the importance map, and connections between importance maps from different IL models. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
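The masking-and-retraining loop described above can be sketched as follows; `train_and_evaluate` is a placeholder for retraining the IL model on the masked demonstration and returning its environment return.

```python
# Sketch of a RISE-style importance map: each random frame mask is scored by
# the retrained policy's environment return, and masks are averaged with
# returns as coefficients.
import numpy as np

def importance_map(num_frames, train_and_evaluate, num_masks=100, keep_prob=0.5):
    """Per-frame importance: return-weighted frequency of frame inclusion."""
    scores = np.zeros(num_frames)
    norm = np.zeros(num_frames)
    for _ in range(num_masks):
        mask = np.random.rand(num_frames) < keep_prob  # frames kept this round
        ret = train_and_evaluate(mask)                 # environment return
        scores += ret * mask
        norm += mask
    return scores / np.maximum(norm, 1)

# Toy stand-in: pretend only frames 10-19 matter for the policy's return.
def fake_train_and_evaluate(mask):
    return mask[10:20].mean()

print(importance_map(50, fake_train_and_evaluate).round(2))
```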
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical for improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e., blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e., flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with a low computational cost and higher consistency with human visual perception. For temporal artifacts, we improve the self-attention-based TimeSFormer to detect them. Based on these six types of PEAs, we propose a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM). Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
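Purely as an illustration of saliency-aware pooling (the abstract does not specify SSTAM's aggregation rule), one could weight per-pixel artifact maps by a normalized saliency map:

```python
# Illustration only: weighting per-pixel artifact strength maps by normalized
# saliency, so artifacts in salient regions count more toward the score.
import numpy as np

def saliency_weighted_scores(artifact_maps, saliency):
    """artifact_maps: dict name -> (H, W) strength map in [0, 1].
    saliency: (H, W) map in [0, 1]. Higher score = worse perceived quality."""
    w = saliency / (saliency.sum() + 1e-8)
    return {name: float((m * w).sum()) for name, m in artifact_maps.items()}

H, W = 64, 64
maps = {pea: np.random.rand(H, W) for pea in
        ("blurring", "blocking", "bleeding", "ringing", "flickering", "floating")}
print(saliency_weighted_scores(maps, np.random.rand(H, W)))
```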
We propose a distributionally robust return-risk model for Markov decision processes (MDPs) under risk and reward ambiguity. The proposed model optimizes the weighted average of mean and percentile performances, and it covers the distributionally robust MDPs and the distributionally robust chance-constrained MDPs (both under reward ambiguity) as special cases. By considering that the unknown reward distribution lies in a Wasserstein ambiguity set, we derive a tractable reformulation of our model. In particular, we show that the return-risk model can also account for risk from an uncertain transition kernel when one seeks only deterministic policies, and that a distributionally robust MDP under the percentile criterion can be reformulated as its nominal counterpart at an adjusted risk level. A scalable first-order algorithm is designed to solve large-scale problems, and we demonstrate the advantages of our proposed model and algorithm through numerical experiments.
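One plausible schematic form of the weighted mean-percentile objective follows; the mixing weight, risk level, and Wasserstein radius are assumed notation rather than the paper's exact formulation.

```latex
% Schematic only: a return-risk objective mixing mean and percentile (VaR)
% performance over a Wasserstein ball of radius epsilon around the empirical
% reward distribution; lambda and alpha are assumed notation.
\max_{\pi}\;
\inf_{\mathbb{P} \in \mathcal{B}_{\epsilon}(\widehat{\mathbb{P}})}
\Big\{ \lambda\, \mathbb{E}_{\mathbb{P}}\big[ R^{\pi} \big]
      + (1 - \lambda)\, \operatorname{VaR}^{\mathbb{P}}_{\alpha}\big( R^{\pi} \big) \Big\},
\qquad \lambda \in [0, 1].
```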
Witnessing the impressive achievements of pre-training techniques on large-scale data in the fields of computer vision and natural language processing, we ask whether this idea could be adapted in a grab-and-go spirit to mitigate the sample inefficiency problem in visuomotor driving. Given the highly dynamic and variable nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive amounts of information irrelevant to decision making, rendering predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pre-training in visuomotor driving. We aim to learn policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representations by predicting future ego-motion and optimizing the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, with improvements ranging from 2% to over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
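A minimal sketch of a photometric error of the kind used for self-supervision in the second stage; real pipelines warp with predicted depth and ego-motion and typically add SSIM and masking terms, whereas this toy version warps with a generic flow field.

```python
# Toy photometric loss: warp frame_t1 toward frame_t with a dense pixel-offset
# field and compare with an L1 error. Real depth/ego-motion warping and SSIM
# terms are omitted.
import torch
import torch.nn.functional as F

def photometric_loss(frame_t, frame_t1, flow):
    """images: (B, 3, H, W); flow: (B, 2, H, W) pixel offsets."""
    B, _, H, W = frame_t.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).float().unsqueeze(0).expand(B, -1, -1, -1)
    grid = base + flow.permute(0, 2, 3, 1)              # (B, H, W, 2) in pixels
    gx = 2 * grid[..., 0] / (W - 1) - 1                 # normalize to [-1, 1]
    gy = 2 * grid[..., 1] / (H - 1) - 1
    warped = F.grid_sample(frame_t1, torch.stack((gx, gy), dim=-1),
                           align_corners=True)
    return (warped - frame_t).abs().mean()              # L1 photometric error

frames_t = torch.rand(2, 3, 64, 64)
frames_t1 = torch.rand(2, 3, 64, 64)
flow = torch.zeros(2, 2, 64, 64, requires_grad=True)    # stand-in for ego-motion
photometric_loss(frames_t, frames_t1, flow).backward()
```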